PID Control of Biochemical Reaction Networks
Principles of feedback control have been shown to arise naturally in
biological systems and have been successfully applied to build synthetic
circuits. In this work we consider Chemical Reaction Networks (CRNs) as a
paradigm for
modelling biochemical systems and provide the first implementation of a
derivative component in CRNs. That is, given an input signal represented by the
concentration level of some species, we build a CRN that produces as output
the concentrations of two species whose difference is the derivative of the
input
signal. By relying on this component, we present a CRN implementation of a
feedback control loop with Proportional-Integral-Derivative (PID) controller
and apply the resulting control architecture to regulate protein expression
in a microRNA-regulated gene expression model.
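The derivative component can be understood as a fast low-pass filter: a species Y produced by the input U and degraded at the same rate k satisfies dy/dt = k(u - y), so k(u - y) approaches du/dt as k grows. The Python sketch below simulates this idea assuming mass-action kinetics; the signal, rate constant, and single-species encoding are illustrative simplifications, not the paper's construction, which uses two species precisely because concentrations cannot be negative while a derivative can.

    import numpy as np
    from scipy.integrate import solve_ivp

    k = 50.0                                 # illustrative rate constant
    u = lambda t: 1.0 + 0.5 * np.sin(t)      # input species concentration
    du = lambda t: 0.5 * np.cos(t)           # true derivative, for comparison

    # CRN (mass-action): U -> U + Y at rate k, Y -> 0 at rate k,
    # which gives dy/dt = k * (u(t) - y).
    sol = solve_ivp(lambda t, y: k * (u(t) - y[0]), (0.0, 10.0), [u(0.0)],
                    dense_output=True, max_step=0.01)

    ts = np.linspace(1.0, 10.0, 5)
    approx = k * (u(ts) - sol.sol(ts)[0])    # CRN estimate of du/dt
    print(np.max(np.abs(approx - du(ts))))   # error shrinks as k grows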
Promises of Deep Kernel Learning for Control Synthesis
Deep Kernel Learning (DKL) combines the representational power of neural
networks with the uncertainty quantification of Gaussian Processes, making it
a promising tool for learning and controlling complex dynamical systems.
In this work, we develop a scalable abstraction-based framework that enables
the use of DKL for control synthesis of stochastic dynamical systems against
complex specifications. Specifically, we consider temporal logic specifications
and create an end-to-end framework that uses DKL to learn an unknown system
from data and formally abstracts the DKL model into an Interval Markov Decision
Process (IMDP) to perform control synthesis with correctness guarantees.
Furthermore, we identify a deep architecture that enables accurate learning and
efficient abstraction computation. The effectiveness of our approach is
illustrated on various benchmarks, including a 5-D nonlinear stochastic system,
showing how control synthesis with DKL can substantially outperform
competing state-of-the-art methods.
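As a rough illustration of the abstraction step, the sketch below turns a Gaussian posterior predictive over the next state (the kind of output a trained DKL model provides) into transition probability intervals for an IMDP over a 1-D state space. The mean and standard deviation functions are stand-ins for a real DKL posterior, and the grid-based min/max is only an approximation of the sound bounds computed in the paper.

    import numpy as np
    from scipy.stats import norm

    # Stand-ins for the DKL posterior over the next state given the current
    # state (for a fixed action): posterior mean and standard deviation.
    mu = lambda x: 0.9 * x + 0.1
    sigma = lambda x: 0.05 + 0.01 * np.abs(x)

    def transition_interval(cell, target, n_grid=200):
        """Approximate bounds on P(next state in target) over a source cell."""
        xs = np.linspace(cell[0], cell[1], n_grid)   # grid over the source cell
        p = (norm.cdf((target[1] - mu(xs)) / sigma(xs))
             - norm.cdf((target[0] - mu(xs)) / sigma(xs)))
        return p.min(), p.max()                      # IMDP interval [p_lo, p_hi]

    print(transition_interval(cell=(0.0, 0.2), target=(0.0, 0.2)))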
Formal Abstraction of General Stochastic Systems via Noise Partitioning
Verifying the performance of safety-critical, stochastic systems with complex
noise distributions is difficult. We introduce a general procedure for the
finite abstraction of nonlinear stochastic systems with non-standard (e.g.,
non-affine, non-symmetric, non-unimodal) noise distributions for verification
purposes. The method uses a finite partitioning of the noise domain to
construct an interval Markov chain (IMC) abstraction of the system via
transition probability intervals. Noise partitioning allows for a general class
of distributions and structures, including multiplicative and mixture models,
and admits both known and data-driven systems. The partitions required for
optimal transition bounds are specified for systems that are monotonic with
respect to the noise, and explicit partitions are provided for affine and
multiplicative structures. By the soundness of the abstraction procedure,
verification on the IMC provides guarantees on the stochastic system against a
temporal logic specification. In addition, we present a novel refinement-free
algorithm that improves the verification results. Case studies on linear and
nonlinear systems with non-Gaussian noise, including a data-driven example,
demonstrate the generality and effectiveness of the method without introducing
excessive conservatism.
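A minimal sketch of the noise-partitioning idea for a 1-D system x+ = f(x) + w with additive noise (the paper also handles multiplicative and mixture structures): each piece of the noise partition with known probability mass contributes to a sound lower bound on the transition probability when it is guaranteed to land in the target region, and to the upper bound when it possibly does. The dynamics, partition, and masses below are illustrative.

    import numpy as np

    f = lambda x: 0.8 * x                    # assumed known dynamics
    # Noise partition: (interval, probability mass of w in that interval).
    noise_parts = [((-0.3, -0.1), 0.2), ((-0.1, 0.1), 0.6), ((0.1, 0.3), 0.2)]

    def imc_interval(cell, target):
        """Bounds on P(x+ in target) valid for every x in the source cell."""
        f_lo, f_hi = f(cell[0]), f(cell[1])  # f monotone on the cell (assumed)
        p_lo = p_hi = 0.0
        for (w_lo, w_hi), mass in noise_parts:
            reach_lo, reach_hi = f_lo + w_lo, f_hi + w_hi   # reachable set
            if target[0] <= reach_lo and reach_hi <= target[1]:
                p_lo += mass                 # piece always lands in target
            if reach_hi >= target[0] and reach_lo <= target[1]:
                p_hi += mass                 # piece may land in target
        return p_lo, p_hi

    print(imc_interval(cell=(0.0, 0.25), target=(-0.1, 0.3)))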
Adversarial Robustness Certification for Bayesian Neural Networks
We study the problem of certifying the robustness of Bayesian neural networks
(BNNs) to adversarial input perturbations. Given a compact set of input points
T and a set of output points S, we define two notions of robustness for BNNs
in an adversarial setting: probabilistic robustness and decision robustness.
Probabilistic robustness is the probability that, for all points in T, the
output of a BNN sampled from the posterior is in S. Decision robustness, on
the other hand, considers the optimal decision of a BNN and checks whether,
for all points in T, the optimal decision of the BNN for a given loss
function lies within the output set S. Although exact computation of these
robustness properties is
challenging due to the probabilistic and non-convex nature of BNNs, we present
a unified computational framework for efficiently and formally bounding them.
Our approach is based on weight interval sampling, integration, and bound
propagation techniques, and can be applied to BNNs with a large number of
parameters, independently of the (approximate) inference method employed to
train the BNN. We evaluate the effectiveness of our methods on various
regression and classification tasks, including an industrial regression
benchmark, MNIST, traffic sign recognition, and airborne collision avoidance,
and demonstrate that our approach enables certification of robustness and
uncertainty of BNN predictions.
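One ingredient of such a framework is propagating a box of weights through the network with interval arithmetic. The sketch below bounds the output of a tiny ReLU network over an input ball when every weight lies in a sampled interval [W - r, W + r] around a posterior sample; the architecture, radius, and values are illustrative, not the paper's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    def interval_matmul(W_lo, W_hi, x_lo, x_hi):
        """Sound bounds on W @ x for W in [W_lo, W_hi], x in [x_lo, x_hi]."""
        mid_W, rad_W = (W_lo + W_hi) / 2, (W_hi - W_lo) / 2
        mid_x, rad_x = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
        mid = mid_W @ mid_x
        rad = np.abs(mid_W) @ rad_x + rad_W @ np.abs(mid_x) + rad_W @ rad_x
        return mid - rad, mid + rad

    # Posterior weight sample and an interval of radius r around it.
    W1, W2, r = rng.normal(size=(8, 2)), rng.normal(size=(1, 8)), 0.05
    x_lo, x_hi = np.array([0.4, 0.4]), np.array([0.6, 0.6])   # input ball

    h_lo, h_hi = interval_matmul(W1 - r, W1 + r, x_lo, x_hi)
    h_lo, h_hi = np.maximum(h_lo, 0), np.maximum(h_hi, 0)     # ReLU is monotone
    y_lo, y_hi = interval_matmul(W2 - r, W2 + r, h_lo, h_hi)
    print(y_lo, y_hi)   # every weight in the box maps the ball inside this range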
Individual Fairness in Bayesian Neural Networks
We study Individual Fairness (IF) for Bayesian neural networks (BNNs).
Specifically, we consider the ε-δ-individual fairness notion, which requires
that, for any pair of input points that are ε-similar according to a given
similarity metric, the output of the BNN is within a given tolerance δ > 0.
We leverage bounds on statistical sampling over the input space and the
relationship between adversarial robustness and individual fairness to derive
a framework for the systematic estimation of ε-δ-IF, designing Fair-FGSM and
Fair-PGD as global, fairness-aware extensions to gradient-based attacks for
BNNs. We
empirically study IF of a variety of approximately inferred BNNs with different
architectures on fairness benchmarks, and compare against deterministic models
learnt using frequentist techniques. Interestingly, we find that BNNs trained
by means of approximate Bayesian inference consistently tend to be markedly
more individually fair than their deterministic counterparts.
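In the spirit of Fair-FGSM, though not the authors' exact algorithm, the sketch below takes a single signed-gradient step inside a per-feature metric ball, here with a deliberately loose budget on the last (sensitive) feature, and reports the resulting change in the network's output; a gap above the tolerance δ would witness an ε-δ-IF violation. For a BNN the same check would be repeated across posterior samples.

    import torch

    torch.manual_seed(0)
    net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(),
                              torch.nn.Linear(16, 1))

    # Per-feature budget induced by the similarity metric: small moves on the
    # ordinary features, a loose budget on the sensitive one (illustrative).
    budget = torch.tensor([0.1, 0.1, 1.0])

    x = torch.tensor([[0.2, -0.5, 1.0]])
    x_adv = x.clone().requires_grad_(True)
    net(x_adv).sum().backward()               # gradient of the output wrt input
    x_pert = x + budget * x_adv.grad.sign()   # one FGSM-style step in the ball
    gap = (net(x_pert) - net(x)).abs().item()
    print(gap)                                # gap > delta witnesses a violation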
Statistical Guarantees for the Robustness of Bayesian Neural Networks
We introduce a probabilistic robustness measure for Bayesian Neural Networks
(BNNs), defined as the probability that, given a test point, there exists a
point within a bounded set such that the BNN prediction differs between the
two. Such a measure can be used, for instance, to quantify the probability of
the existence of adversarial examples. Building on statistical verification
techniques for probabilistic models, we develop a framework that allows us to
estimate probabilistic robustness for a BNN with statistical guarantees, i.e.,
with a priori error and confidence bounds. We provide experimental comparison
for several approximate BNN inference techniques on image classification tasks
associated with MNIST and a two-class subset of the GTSRB dataset. Our results
enable quantification of uncertainty of BNN predictions in adversarial
settings.
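The a priori error and confidence bounds come from standard concentration inequalities: by a Chernoff/Hoeffding bound, n >= ln(2/δ)/(2ε²) independent posterior samples suffice to estimate the robustness probability within ±ε with confidence 1 - δ. In the sketch below, the stub sampler stands in for drawing a network from the BNN posterior and checking the robustness property on it.

    import math, random

    def required_samples(err, delta):
        """Hoeffding: 2*exp(-2*n*err^2) <= delta => n >= ln(2/delta)/(2*err^2)."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * err ** 2))

    def estimate(sampler, err=0.01, delta=0.001):
        n = required_samples(err, delta)
        return sum(sampler() for _ in range(n)) / n   # within +/-err, w.p. 1-delta

    random.seed(0)
    toy_check = lambda: random.random() < 0.9   # stub: 1 iff sampled net is robust
    print(required_samples(0.01, 0.001), estimate(toy_check))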